WIP: Service rewrite #99 (Draft)
1ntEgr8 wants to merge 126 commits into erdos-project:main from 1ntEgr8:service-rewrite
Conversation
1ntEgr8 force-pushed the service-rewrite branch from e48ac98 to 7036fcf on December 3, 2024 at 20:31.
Fixes an issue where run_service_experiments.py hangs without reporting the exception if start-master.sh or start-worker.sh exits with a nonzero code, or more generally if an exception is raised in Service.__enter__().
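A minimal sketch of the fix, assuming a hypothetical `Service` context manager (the class name matches the PR, but the constructor and script paths here are illustrative): letting the start script's failure raise inside `__enter__` means the exception propagates to the experiment runner instead of leaving it hanging.

```python
import subprocess


class Service:
    """Hypothetical sketch: a service whose start script must succeed."""

    def __init__(self, start_cmd):
        self.start_cmd = start_cmd

    def __enter__(self):
        # check=True raises CalledProcessError on a nonzero exit code,
        # so the failure surfaces to the caller rather than being
        # silently swallowed while the runner waits forever.
        subprocess.run(self.start_cmd, check=True)
        return self

    def __exit__(self, exc_type, exc, tb):
        # Never suppress exceptions raised inside the with-block.
        return False
```

With this shape, a `with Service(...)` block in the experiment runner either enters cleanly or fails loudly.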
When the last application is deregistered from the Spark service, execute all remaining events in the simulator. This allows the final LOG_STATS event to be processed so we can calculate SLO attainment. Unlike normal runs of the simulator, no SIMULATOR_END event is inserted, since some tasks may not have finished in the simulator and it is unclear when they will. The simulator is patched to allow an empty event queue in Simulator.simulate().
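A minimal sketch of the drain behavior, assuming a heap-ordered event queue (the function and queue shape are illustrative, not the simulator's actual API): every remaining event is handled in timestamp order, and the loop simply ends when the queue is empty rather than requiring a SIMULATOR_END sentinel.

```python
import heapq


def drain_events(event_queue, handle):
    """Process every remaining (time, event) entry in timestamp order.

    Unlike a normal simulation loop, no terminating sentinel is
    required: an empty queue simply ends the run.
    """
    while event_queue:
        time, event = heapq.heappop(event_queue)
        handle(time, event)
```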
On a TASK_FINISH event, set the task's completion time to the time of the event rather than the last time the task was stepped. This resolves a bug in the service where tasks that finish later than the simulator's profiled runtime predicts were assigned the wrong completion time.
This reverts commit a22e406.
We found that task-graph deadlines were not consistent between the simulator and Spark even with the same RNG seed, because EventTime keeps a single global RNG that it uses for all of its fuzzing, and both deadlines and runtime variances are fuzzed. In simulator runs, all task deadlines are calculated up front and runtime variances later; in Spark, deadlines and runtime variances are calculated throughout the experiment lifecycle. As a result, the same experiment draws different deadline variances in simulator and Spark runs. Our solution is to pass the fuzzer a dedicated RNG used only for calculating deadline variances. This RNG is hardcoded with a seed of 42; that is fine for experiments, but it should probably be changed to use the random_seed command-line flag.
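The idea can be sketched as follows (the `fuzz` helper is illustrative, not EventTime's actual API): drawing deadline variances from a dedicated `random.Random(42)` makes the deadline sequence independent of how many other draws the global RNG has served, so simulator and Spark runs see identical deadlines.

```python
import random


def fuzz(value, variance, rng):
    # Hypothetical fuzzer: perturb `value` by a random offset drawn
    # from the given RNG within the [lo, hi] variance bounds.
    lo, hi = variance
    return value + rng.randint(lo, hi)


# Dedicated RNG used only for deadline variances, mirroring the
# hardcoded seed of 42 described above; runtime variances would keep
# drawing from the global RNG.
deadline_rng = random.Random(42)
```

Because the deadline RNG is isolated, interleaved runtime-variance draws from the global RNG no longer shift the deadline sequence.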
Originally, the service named job graphs in the form Q<query>[<spark-app-id>], where Spark sets the app id to app-<timestamp>-<index>, while the TPC-H data loader named job graphs in the form Q<query>[<index>]. This commit changes how the service names job graphs: the index is passed as an argument to the TpchQuery Spark application, which forwards it to the Servicer through RegisterTaskGraph as part of the query name. RegisterTaskGraph then uses the index to name the job graph. This ensures that job graph names are always the same between a Spark run and a simulator run, irrespective of when the task graphs are actually released during a Spark run (which can be nondeterministic). The intent is to use these names to generate deadlines for the task graphs, so that deadlines are always consistent between Spark and simulator runs. This change requires a corresponding change to tpch-spark to forward the index to the Servicer.
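The naming scheme itself is tiny; a sketch (the helper name is hypothetical) makes the before/after contrast concrete: the new form depends only on the query number and a deterministic index, never on Spark's timestamp-based app id.

```python
def job_graph_name(query: int, index: int) -> str:
    # New form Q<query>[<index>], e.g. "Q4[12]". Unlike the old
    # Q<query>[app-<timestamp>-<index>] form, this is identical
    # between Spark and simulator runs regardless of release order.
    return f"Q{query}[{index}]"
```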
To avoid needless reruns, TetriSchedScheduler skips a scheduling run when every task is either already scheduled, part of a task graph that was previously considered, or part of a task graph that has been cancelled. We remove the previously-considered condition to handle situations in which a task graph is considered and its tasks scheduled, but the tasks fail to be placed (for instance, because another task on the same worker finished late and held on to resources). In such cases the task graph is not cancelled and might still complete, so the scheduler needs to run again to try to place the tasks that could not be placed before.
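The revised rerun condition can be sketched as a predicate over a hypothetical task representation (the dict keys here are illustrative, not the scheduler's actual task model): the scheduler runs again whenever some task is unscheduled and its graph is not cancelled, with no exemption for previously considered graphs.

```python
def should_run_scheduler(tasks):
    # Rerun if any task is unscheduled and belongs to a task graph
    # that has not been cancelled. The old "previously considered"
    # exemption is gone, so tasks whose placement failed (e.g. a
    # late-finishing neighbor held the worker's resources) get
    # another scheduling attempt.
    return any(
        not t["scheduled"] and not t["graph_cancelled"]
        for t in tasks
    )
```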
Depends on #97, hence the larger than expected diff.